Mathematical Optimization


Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information

Neural Information Processing Systems

Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning. By learning an optimizer $\mathbf{g}$ to tackle these challenging problems with $f$ as the objective, the optimization process can be substantially accelerated by leveraging past experience. The optimizer can be trained with supervision from known optimal solutions or implicitly by optimizing the compound function $f\circ \mathbf{g}$. The implicit approach may not require optimal solutions as labels and is capable of handling problem uncertainty; however, it is slow to train and deploy due to frequent calls to optimizer $\mathbf{g}$ during both training and testing. The training is further challenged by sparse gradients of $\mathbf{g}$, especially for combinatorial solvers.
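A minimal NumPy sketch of the implicit training idea described above, not the paper's method: a parametric "learned optimizer" g (here just a linear map W) is trained by gradient descent on the compound loss f(g(c)) over sampled problem instances. The toy objective f and all names are illustrative assumptions.

```python
import numpy as np

# Toy objective f(x; c) = c . x + ||x||^2, parameterized by problem data c.
def f(x, c):
    return c @ x + x @ x

# Linear "learned optimizer" g(c) = W c, mapping problem data to a candidate solution.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 3))

# Implicit training: minimize the compound loss f(g(c), c) over sampled instances,
# with no optimal solutions used as labels.
lr = 0.05
for _ in range(500):
    c = rng.normal(size=3)        # a random problem instance
    x = W @ c                     # g's proposed solution
    grad_x = c + 2 * x            # analytic gradient of f w.r.t. x
    W -= lr * np.outer(grad_x, c) # chain rule through g: dL/dW = (df/dx) c^T

# For this quadratic f, the per-instance optimum is x* = -c/2,
# so the trained W should approach -I/2.
```

The same structure carries over when g is a neural network and f comes from a downstream solver; the difficulty the abstract points to is precisely that for combinatorial solvers the gradient of the compound loss is sparse or unavailable.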


Coherent Local Explanations for Mathematical Optimization

Otto, Daan, Kurtz, Jannis, Birbil, S. Ilker

arXiv.org Artificial Intelligence

The surge of explainable artificial intelligence methods seeks to enhance transparency and explainability in machine learning models. At the same time, there is a growing demand for explaining decisions made by the complex algorithms used in mathematical optimization. However, current explanation methods do not take the structure of the underlying optimization problem into account, leading to unreliable outcomes. In response to this need, we introduce Coherent Local Explanations for Mathematical Optimization (CLEMO). CLEMO provides explanations for multiple components of optimization models, namely the objective value and the decision variables, that are coherent with the underlying model structure. Our sampling-based procedure can provide explanations for the behavior of exact and heuristic solution algorithms. The effectiveness of CLEMO is illustrated by experiments on the shortest path problem, the knapsack problem, and the vehicle routing problem.
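As an illustration of the kind of sampling-based local explanation the abstract describes (a hand-rolled sketch under assumed details, not CLEMO's actual procedure): perturb the parameters of a tiny shortest-path instance, re-solve each sample by enumeration, and fit a local linear model of the optimal cost. The instance, weights, and names below are all illustrative.

```python
import numpy as np

# Tiny shortest-path instance from node 0 to node 3; simple paths enumerated by hand.
edges = {(0, 1): 0, (0, 2): 1, (1, 2): 2, (1, 3): 3, (2, 3): 4}  # edge -> weight index
paths = [(0, 1, 3), (0, 2, 3), (0, 1, 2, 3)]

def sp_cost(w):
    """Optimal (minimum) path cost under edge weights w."""
    return min(sum(w[edges[a, b]] for a, b in zip(p, p[1:])) for p in paths)

w0 = np.array([1.0, 2.0, 0.5, 2.5, 1.0])  # nominal edge weights
rng = np.random.default_rng(1)

# Sample small perturbations of the weights, re-solve, and fit a local linear model.
X = w0 + 0.1 * rng.normal(size=(200, 5))
y = np.array([sp_cost(w) for w in X])
A = np.c_[X, np.ones(len(X))]             # design matrix with an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# coef[:5] estimates the local sensitivity of the optimal cost to each edge weight:
# close to 1 for edges on the optimal path (0-1, 1-2, 2-3) and close to 0 elsewhere.
```

A coherent explanation method in the abstract's sense would additionally constrain such fitted surrogates so that the explained objective value and decision variables remain consistent with each other.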


You (also) need Mathematical Optimization in your organization … now!

#artificialintelligence

I will tell you the story of Adam*. Adam is a truck dispatcher working in a distribution warehouse. His daily job is to assign a few hundred daily orders to trucks so that they can be delivered to customers on time. He has been doing this for 10 years, and it is very hard to replace him (even when he is sick) because he knows the customers, orders, and trucking companies quite well. Adam needs a second monitor, since he must continuously work with order data and truck data simultaneously while checking distances and driving durations on a map. The monitor costs the company €200, but it helps Adam work much more efficiently by reducing the time he spends switching between windows on his computer.


3 Ways That Mathematical Optimization Can Be Used to Improve Machine Learning Applications - Gurobi

#artificialintelligence

My career as a practitioner and researcher in the data science space has spanned more than 30 years, and during that time I have seen a lot of new advanced analytics technologies – touted as "the latest and greatest," "cutting-edge," "game-changing," or some other similar superlative – sizzle and then fizzle. The hype cycles (as Gartner calls them) of these technologies were short, as they failed to deliver real-world business impact and attain long-term commercial viability. One advanced analytics technology that bucks that trend and has been around ever since I entered the professional arena in the early 1990s (and actually long before that, with the introduction of linear programming in the 1940s) is mathematical optimization. For decades, mathematical optimization has been widely used by companies of all sizes and stripes to address their complex business problems. The secret to mathematical optimization's staying power is that it has consistently demonstrated that it is capable of generating optimal solutions to large-scale, real-world business problems – and has thereby produced significant business value.


GitHub - openopt/copt: A Python library for mathematical optimization

#artificialintelligence

Its goal is to provide a high-quality implementation of classical optimization algorithms under a consistent API. Cloning the repository creates a copt directory; the tests can then be run with py.test tests/


Artificial Intelligence Upskills Software via Mathematics - ASME

#artificialintelligence

Fusing artificial intelligence with mathematical optimization will dramatically increase the "brainpower" for the task at hand, whether it's optimizing flight patterns or bringing energy and food to underserved areas. That's the word from the academic researchers who are part of a new interdisciplinary institute that aims to integrate the two fields. The National AI Institute for Advances in Optimization (AI4OPT) is led by a multidisciplinary team from six U.S. universities, including computer science and civil, environmental, electrical, and computer engineering professors. The combined methods will foster no less than a "paradigm shift" in optimization, said Pascal Van Hentenryck, professor of industrial and systems engineering at Georgia Tech and institute lead. According to Van Hentenryck, tackling problems at the scale and complexity faced by society today requires a fusion of optimization and machine learning, with the two technologies working hand-in-hand.


Council Post: Four Key Differences Between Mathematical Optimization And Machine Learning

#artificialintelligence

Edward Rothberg is CEO and Co-Founder of Gurobi Optimization, which produces the world's fastest mathematical optimization solver. This is a question that -- as the CEO of a mathematical optimization software company -- I get asked all the time. Although it seems like a simple question, it's actually quite difficult to come up with a concise, coherent answer. Indeed, mathematical optimization and machine learning are two tools that at first glance -- like scissors and pliers -- may seem to have a lot in common. When you look closely at their fundamental features and actual applications, however, you'll see some important differences.


Linear Programming for Data Science and Business Analysis

#artificialintelligence

In this course you will learn about the mathematical optimization technique of linear programming for data science and business analytics. The course is unique and has its own importance in these disciplines, as data science and business studies rely heavily on optimization. Optimization is the study of analyzing and interpreting mathematical data under special rules and formulas. The course runs for more than 6 hours and contains more than 4 sections.
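As a small illustration of the kind of linear program such a course covers (an assumed example using SciPy's `linprog`, not material from the course itself): maximize 3x + 2y subject to two resource constraints and nonnegativity.

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes by convention, so we negate the objective coefficients.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])

print(res.x)     # optimal decision variables
print(-res.fun)  # maximized objective value
```

Checking the feasible region's vertices by hand, the optimum is x = 4, y = 0 with objective value 12, which the solver recovers.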


Experts Join Rensselaer-IBM Artificial Intelligence Research Collaboration

#artificialintelligence

"The addition of these faculty is expanding our interdisciplinary cohort of AI researchers across the entire campus. We expect these four outstanding faculty members are the first wave of hires who will increase our capabilities for AI and machine learning research across all five of Rensselaer's schools," said James Hendler, director of the AIRC, and a Rensselaer Tetherless World Professor of Computer, Web, and Cognitive Science. The Rensselaer-IBM AIRC is dedicated to advancing the science of artificial intelligence and enabling the use of AI and machine learning in research investigations, innovations, and applications of joint interest to both Rensselaer and IBM. The collaboration fosters the growth of AI and machine learning capabilities through faculty hires, by funding specific research initiatives, and through funding top graduate students as IBM AI Horizons fellows. For more information about the AIRC, watch this video.